Glass transitions are widely observed in a range of soft-matter systems. However, despite years of research, the physical mechanism underlying these transitions remains unknown. In particular, an important unresolved question is whether the glass transition is accompanied by a divergence of the correlation length of a characteristic static structure. Recently, a method was proposed that accurately predicts long-time dynamics from purely static structural information; however, even this method is not universal, and it fails for the Kob-Andersen system, a canonical model of glass-forming liquids. In this study, we develop a method that uses machine learning, in particular convolutional neural networks, to extract the characteristic structures of glasses. Specifically, we extract the characteristic structures by quantifying the grounds for the decisions made by the network. We consider two qualitatively different glass-forming binary systems, and, through comparison with several established structural indicators, we demonstrate that our approach can identify characteristic structures that depend on the details of each system. Surprisingly, the extracted structures are closely correlated with the nonequilibrium aging dynamics under thermal fluctuations.
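The idea of quantifying the grounds for a network's decision can be illustrated with input-gradient saliency. This is a minimal sketch, not the paper's CNN: a hypothetical logistic "classifier" over structural descriptors, where the gradient magnitude of the predicted probability with respect to each input feature ranks how strongly that feature drives the glass/liquid decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, b, x):
    """Gradient of the predicted probability w.r.t. the input features.

    For a logistic model p = sigmoid(w.x + b), dp/dx = p*(1-p)*w, so the
    magnitude of each component ranks how strongly that structural feature
    drives the decision.
    """
    p = sigmoid(w @ x + b)
    return p * (1.0 - p) * w

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical trained weights
x = rng.normal(size=8)   # hypothetical local-structure descriptor
s = saliency(w, b=0.1, x=x)

# Features with the largest |gradient| constitute the "characteristic
# structure" in this toy picture.
ranking = np.argsort(-np.abs(s))
```

Real attribution methods for CNNs (e.g. gradient-times-input or Grad-CAM-style maps) follow the same principle of differentiating the decision with respect to the input.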
Given recent advances in music source separation and automatic mixing, removing audio effects from music tracks is a meaningful step toward developing automated mixing systems. This paper focuses on removing distortion audio effects applied to guitar tracks in music production. We explore whether effect removal can be addressed by neural networks designed for source separation and audio-effect modeling. Our approach proves particularly effective for effects that mix the processed and clean signals. Compared with a state-of-the-art solution based on sparse optimization, these models achieve better quality and faster inference. We show that the models are suitable not only for the distortion effect considered but also for other types of distortion effects. In discussing the results, we highlight the usefulness of multiple evaluation metrics for assessing different aspects of reconstruction in distortion-effect removal.
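The effect class described above (a nonlinearity blended with the clean signal) can be illustrated in a few lines. This is an assumed form for illustration, not the paper's models: a hard-clipping distortion followed by a wet/dry mix.

```python
import numpy as np

def hard_clip(x, threshold=0.3):
    """Memoryless hard-clipping nonlinearity, a common distortion model."""
    return np.clip(x, -threshold, threshold)

def wet_dry_mix(clean, wet, mix=0.5):
    """Blend the distorted ("wet") signal with the clean ("dry") signal,
    as effect units that mix processed and clean signals do."""
    return mix * wet + (1.0 - mix) * clean

t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)   # toy stand-in for a guitar signal
distorted = hard_clip(clean)
mixed = wet_dry_mix(clean, distorted, mix=0.7)
```

Effect removal then amounts to recovering `clean` from `mixed`, which is easier when part of the dry signal survives in the mix.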
In contrastive representation learning, data representations are trained so that an image instance can still be classified even when it is altered by augmentations. However, depending on the dataset, some augmentations can corrupt the information in an image beyond recognition, and such augmentations can lead to collapsed representations. We present a partial solution to this problem by formalizing a stochastic encoding process in which there is a trade-off between the corruption of data by augmentations and the information preserved by the encoder. We show that, with an InfoMax objective based on this framework, we can learn a data-dependent distribution of augmentations that avoids collapse of the representation.
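The collapse phenomenon can be seen directly in a standard contrastive objective. The following is a generic InfoNCE sketch (not the paper's exact InfoMax objective), where `z1[i]` and `z2[i]` are two augmented views of the same instance; a collapsed encoder that maps everything to the same point can do no better than chance.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: each view must identify its positive among all views."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # similarity of every pair
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 32))
noise = 0.01 * rng.normal(size=(16, 32))

aligned = info_nce(z, z + noise)       # informative views: low loss
collapsed = info_nce(np.ones((16, 32)), np.ones((16, 32)))  # collapse:
# all pairs look identical, so the loss equals log(batch size)
```

With collapsed representations the loss saturates at `log(N)` for a batch of N, which is why augmentations that destroy instance identity are harmful.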
The goal of the out-of-distribution (OOD) generalization problem is to train a predictor that generalizes across all environments. Popular approaches in this field rely on the hypothesis that such a predictor should be an \textit{invariant predictor} that captures the mechanisms remaining constant across environments. While these approaches have been experimentally successful in various case studies, there is still much room for the theoretical validation of this hypothesis. This paper presents a set of theoretical conditions necessary for an invariant predictor to achieve OOD optimality. Our theory not only applies to nonlinear cases, but also generalizes the necessary conditions used in \citet{Rojas2018Invariant}. We also derive a gradient-alignment algorithm from our theory and demonstrate its competitiveness on two of the three \textit{invariance unit tests} proposed by \citet{Aubinlinear}.
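One common way to operationalize gradient alignment is an AND-mask-style update rule; this is a hedged sketch of that general idea, not necessarily the algorithm derived in the paper: only gradient components whose sign agrees across every training environment are kept, so updates follow mechanisms shared by all environments.

```python
import numpy as np

def aligned_gradient(env_grads):
    """env_grads: (n_envs, n_params) array of per-environment gradients.

    Zero out components whose sign differs between environments
    (environment-specific, spurious directions); average the rest."""
    signs = np.sign(env_grads)
    agree = np.abs(signs.sum(axis=0)) == env_grads.shape[0]  # unanimous sign
    return env_grads.mean(axis=0) * agree

# Two environments, three parameters: the second parameter's gradient
# disagrees in sign, so it is masked out of the update.
g = np.array([[0.5, -0.2, 0.1],
              [0.4,  0.3, 0.2]])
step = aligned_gradient(g)
```

The masked average then replaces the pooled gradient in an otherwise standard gradient-descent loop.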
The purpose of this study is to introduce new design-criteria for next-generation hyperparameter optimization software. The criteria we propose include (1) define-by-run API that allows users to construct the parameter search space dynamically, (2) efficient implementation of both searching and pruning strategies, and (3) easy-to-setup, versatile architecture that can be deployed for various purposes, ranging from scalable distributed computing to light-weight experiments conducted via interactive interfaces. In order to prove our point, we will introduce Optuna, an optimization software which is a culmination of our effort in the development of a next generation optimization software. As an optimization software designed with the define-by-run principle, Optuna is particularly the first of its kind. We will present the design-techniques that became necessary in the development of the software that meets the above criteria, and demonstrate the power of our new design through experimental results and real-world applications. Our software is available under the MIT license (https://github.com/pfnet/optuna/).
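The define-by-run idea can be made concrete with a minimal pure-Python sketch that mimics the shape of Optuna's API (`Trial`, `suggest_*`; install Optuna itself for the real implementation). The point is that the search space is constructed dynamically while the objective runs, so later suggestions can depend on earlier ones.

```python
import random

class Trial:
    """Toy stand-in for a trial object with Optuna-style suggest methods."""
    def __init__(self, rng):
        self.rng = rng
        self.params = {}

    def suggest_float(self, name, low, high):
        self.params[name] = self.rng.uniform(low, high)
        return self.params[name]

    def suggest_int(self, name, low, high):
        self.params[name] = self.rng.randint(low, high)  # inclusive bounds
        return self.params[name]

def objective(trial):
    # Define-by-run: how many parameters exist depends on n_layers, so the
    # search space is not declared ahead of time.
    n_layers = trial.suggest_int("n_layers", 1, 3)
    widths = [trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(n_layers)]
    return sum((w - 0.5) ** 2 for w in widths)   # toy loss to minimize

def optimize(objective, n_trials=50, seed=0):
    """Random-search driver; Optuna's samplers and pruners replace this."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        trial = Trial(rng)
        value = objective(trial)
        if best is None or value < best[0]:
            best = (value, trial.params)
    return best

best_value, best_params = optimize(objective)
```

In Optuna itself the equivalent is `study = optuna.create_study(); study.optimize(objective, n_trials=50)`, with the same define-by-run objective signature.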
One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to the previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
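At its core, spectral normalization divides a weight matrix by an estimate of its largest singular value, obtained cheaply by power iteration. The sketch below is a standalone version that iterates to convergence for clarity; the paper's implementation instead reuses the vector `u` across training steps so that a single iteration per step suffices.

```python
import numpy as np

def spectral_normalize(W, n_iters=100, eps=1e-12):
    """Divide W by an estimate of its spectral norm (largest singular value)."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= (np.linalg.norm(v) + eps)
        u = W @ v
        u /= (np.linalg.norm(u) + eps)
    sigma = u @ W @ v          # Rayleigh-quotient estimate of sigma_max(W)
    return W / sigma

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))   # placeholder discriminator weight matrix
W_sn = spectral_normalize(W)    # spectral norm of W_sn is approximately 1
```

Constraining every layer's spectral norm to 1 bounds the Lipschitz constant of the discriminator, which is the source of the stabilization.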
We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on the ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
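The projection form of the discriminator output can be written as f(x, y) = psi(phi(x)) + v_y . phi(x): an unconditional term plus an inner product between the features phi(x) and a learned class embedding v_y, instead of concatenating an embedded label to the features. The sketch below uses random placeholder weights (not trained parameters) just to show the computation.

```python
import numpy as np

def projection_logit(phi_x, psi_w, class_emb, y):
    """Discriminator output: unconditional linear term psi(phi(x)) plus
    the projection of phi(x) onto the embedding of class y."""
    return psi_w @ phi_x + class_emb[y] @ phi_x

rng = np.random.default_rng(0)
dim, n_classes = 128, 10
phi_x = rng.normal(size=dim)                   # feature vector phi(x)
psi_w = rng.normal(size=dim)                   # final linear layer psi
class_emb = rng.normal(size=(n_classes, dim))  # one embedding v_y per class

logits = [projection_logit(phi_x, psi_w, class_emb, y)
          for y in range(n_classes)]
```

Because the label enters only through an inner product, the conditional part of the logit is linear in the class embedding, matching the log-density-ratio structure of the underlying probabilistic model.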
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
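The label-free perturbation search can be sketched with a toy linear softmax model. This is a minimal sketch under stated assumptions: the model, data, and hyperparameters are placeholders, and the analytic KL gradient of the linear model stands in for the backpropagation used with neural networks; the power-iteration structure is the same.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def vat_perturbation(W, x, xi=1e-2, epsilon=0.5, n_iters=5):
    """Power iteration for the direction that most changes p(y|x).

    No labels are used: the loss is the KL divergence between the model's
    own predictions at x and at the perturbed point x + xi*d."""
    p = softmax(W @ x)
    d = np.random.default_rng(0).normal(size=x.shape)
    for _ in range(n_iters):
        d /= np.linalg.norm(d)
        q = softmax(W @ (x + xi * d))
        d = xi * W.T @ (q - p)   # gradient of KL(p || q) w.r.t. d
    return epsilon * d / np.linalg.norm(d)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))   # placeholder 3-class linear classifier
x = rng.normal(size=8)        # placeholder (possibly unlabeled) input
r_vadv = vat_perturbation(W, x)
```

Training then adds the smoothness penalty KL(p(y|x) || p(y|x + r_vadv)) to the supervised loss, which is well defined for unlabeled points as well.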